Unifying Stochastic Convex Optimization and Active Learning
Authors
Abstract
First-order stochastic convex optimization is an extremely well-studied area with a rich history spanning over a century of optimization research. Active learning is a relatively newer discipline that grew independently of the former, gaining popularity in the learning community over the last few decades due to its promising improvements over passive learning. Over the last year, we have uncovered concrete theoretical and algorithmic connections between these two fields, arising from their inherently sequential nature and their reliance on feedback from earlier choices, that have yielded new methods and proof techniques in both fields. Here, we outline the foundations of these connections and summarize our recent advances, with special focus on the implications for stochastic optimization. Specifically, we obtain an interesting lower bound technique that is quite transparent and simultaneously tight for derivative-free optimization and first-order optimization, for both point error and function error. We show that a randomized coordinate descent algorithm with an active learning line search can achieve minimax optimal rates while being adaptive to unknown uniform convexity parameters. This procedure relies only on unidirectional noisy gradient signs rather than real-valued gradient vectors; as a result, rounding errors and other errors that preserve the sign of the gradient lead to deterministic (non-stochastic) rates of convergence.
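To make the second contribution concrete, here is a minimal illustrative sketch (not the authors' exact procedure) of a randomized coordinate descent in which each coordinate step is chosen by a bisection-style line search driven only by repeated noisy gradient signs; the helper names noisy_sign and sign_line_search, and all parameters below, are assumptions introduced for illustration.

    import numpy as np

    def noisy_sign(dir_derivative, sigma=0.1):
        # Hypothetical oracle: the sign of a directional derivative,
        # observed after additive Gaussian noise.
        return np.sign(dir_derivative + sigma * np.random.randn())

    def sign_line_search(partial_derivative, lo, hi, queries_per_point=32, tol=1e-3):
        # Bisection on one coordinate using only repeated noisy gradient signs:
        # a majority vote over queries_per_point calls plays the role of the
        # active-learning "label request" at each tested point.
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            votes = sum(noisy_sign(partial_derivative(mid)) for _ in range(queries_per_point))
            if votes > 0:      # derivative likely positive -> minimizer lies to the left
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    def coordinate_descent_with_sign_search(grad, x0, radius=1.0, iters=100):
        # Randomized coordinate descent: each iteration picks a coordinate at
        # random and moves it to the point returned by the sign-based search.
        x = x0.copy()
        d = len(x)
        for _ in range(iters):
            i = np.random.randint(d)
            def partial(t, i=i):
                y = x.copy(); y[i] = t
                return grad(y)[i]
            x[i] = sign_line_search(partial, x[i] - radius, x[i] + radius)
        return x

    # Example (hypothetical): minimize f(x) = ||x - c||^2.
    c = np.array([0.3, -0.7, 0.2])
    x_hat = coordinate_descent_with_sign_search(lambda x: 2 * (x - c), np.zeros(3))

Because only the sign of each directional derivative is consulted, any perturbation that preserves that sign (rounding, quantization, and similar errors) leaves the search's decisions, and hence its convergence behavior, unchanged, which is the point made at the end of the abstract.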
Similar resources
Algorithmic Connections between Active Learning and Stochastic Convex Optimization
Recent papers have established interesting theoretical associations between the fields of active learning and stochastic convex optimization, due to the common role of feedback in sequential querying mechanisms. In this paper, we continue this thread in two parts by exploiting these relations for the first time to yield novel algorithms in both fields, further motivating the study of the...
Active Learning as Non-Convex Optimization
We propose a new view of active learning algorithms as optimization. We show that many online active learning algorithms can be viewed as stochastic gradient descent on non-convex objective functions. Variations of some of these algorithms and objective functions have been previously proposed without noting this connection. We also point out a connection between the standard min-margin offline ...
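As a hedged illustration of this viewpoint (a sketch of my own, not one of the algorithms analyzed in the paper), consider SGD on a non-convex sigmoid-smoothed 0-1 loss for a linear classifier: the stochastic gradient is largest for small-margin points and nearly vanishes for confidently classified ones, so the update concentrates effort exactly where a margin-based active learner would query.

    import numpy as np

    def sigmoid_loss_grad(w, x, y):
        # Non-convex smoothed 0-1 loss: ell(m) = 1 / (1 + exp(m)), with margin m = y * <w, x>.
        # Its gradient magnitude s * (1 - s) peaks near the decision boundary.
        m = y * np.dot(w, x)
        s = 1.0 / (1.0 + np.exp(m))
        return -s * (1.0 - s) * y * x

    def active_sgd(stream, dim, lr=0.5):
        # SGD on the non-convex objective: updates (and, implicitly, label
        # requests) are concentrated on low-margin points.
        w = np.zeros(dim)
        for x, y in stream:
            w -= lr * sigmoid_loss_grad(w, x, y)
        return w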
Learning From An Optimization Viewpoint
Optimization has always played a central role in machine learning, and advances in the fields of optimization and mathematical programming have greatly influenced machine learning models. However, the connection between optimization and learning is much deeper: one can phrase statistical and online learning problems directly as corresponding optimization problems. In this dissertation I take this...
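One standard way to make this identification concrete (the notation below is mine, not the dissertation's) is to write statistical learning as stochastic optimization of the population risk:

    \[
      \min_{h \in \mathcal{H}} \; L(h) := \mathbb{E}_{z \sim \mathcal{D}}\big[\ell(h; z)\big],
    \]

where the distribution \(\mathcal{D}\) is never observed directly; each training example \(z_t\) acts as a query to a stochastic oracle, since the gradient of \(\ell(h; z_t)\) is an unbiased estimate of the gradient of \(L(h)\).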
Composite Objective Mirror Descent
We present a new method for regularized convex optimization and analyze it under both online and stochastic optimization settings. In addition to unifying previously known first-order algorithms, such as the projected gradient method, mirror descent, and forward-backward splitting, our method yields new analysis and algorithms. We also derive specific instantiations of our method for commonly use...
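For concreteness, the kind of composite update this describes can be sketched as follows (this is my reading of the standard composite-objective mirror-descent step, not a verbatim statement from the paper):

    \[
      x_{t+1} \;=\; \arg\min_{x} \;\Big\{\, \eta \,\langle g_t, x \rangle \;+\; \eta\, r(x) \;+\; B_\psi(x, x_t) \,\Big\},
    \]

where \(g_t\) is a (sub)gradient of the smooth part of the objective at \(x_t\), \(r\) is the regularizer that is kept exact rather than linearized, and \(B_\psi\) is the Bregman divergence of a mirror map \(\psi\). Choosing \(\psi(x) = \tfrac{1}{2}\|x\|_2^2\) with \(r = 0\) recovers projected gradient descent, while the same \(\psi\) with \(r(x) = \lambda\|x\|_1\) gives a forward-backward (soft-thresholding) step.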
Optimal rates for stochastic convex optimization under Tsybakov noise condition
We focus on the problem of minimizing a convex function f over a convex set S given T queries to a stochastic first-order oracle. We argue that the complexity of convex minimization is only determined by the rate of growth of the function around its minimizer x_{f,S}, as quantified by a Tsybakov-like noise condition. Specifically, we prove that if f grows at least as fast as ‖x − x_{f,S}‖ around its...
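One standard way to write such a Tsybakov-like growth condition (the constants and exponent below are notation introduced here for illustration) is:

    \[
      f(x) - f(x_{f,S}) \;\ge\; \lambda \,\| x - x_{f,S} \|^{\kappa}
      \quad \text{for all } x \in S \text{ in a neighborhood of } x_{f,S},
    \]

for some \(\lambda > 0\) and \(\kappa \ge 1\); the case \(\kappa = 2\) is the familiar strong-convexity lower bound, and larger \(\kappa\) corresponds to a flatter function whose minimizer is harder to localize from noisy first-order information.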
Journal title:
Volume/Issue:
Pages: -
Publication date: 2013